UA Workshop: Python for R users

Author

Johannes B. Gruber

Outline

  1. Why combine Python with R?
  2. Getting started
  3. Workflow
  4. Example 1: spaCy
  5. Example 2: NMF Topic Models from scikit-learn
  6. Example 3: BERTopic
  7. Example 4: Supervised Learning with RoBERTa
  8. Example 5: Zero-Shot Classification

Why combine Python with R?

Why not just switch to Python?

  1. If you’re here, you probably already know R, so why re-learn things from scratch?
  2. R is a programming language specifically for statistics with some great built-in functionality that you would miss in Python.
  3. R has absolutely outstanding packages for data science with no drop-in replacement in Python (e.g., ggplot2, dplyr, tidytext).

Why not just stick with R then?

  1. Newer models and methods in machine learning are often Python-only (as advances are driven by big companies that rely on Python)
  2. You might want to collaborate with someone who uses Python and need to run their code
  3. Learning a new (programming) language is always a good way to extend your skills (also in the language(s) you already know)

Getting started

We start by installing the necessary Python packages, for which you should use a virtual environment (so we set that one up first).

Create a Virtual Environment

Before you load reticulate for the first time, we need to create a virtual environment. This is a folder in your project directory with a link to Python and the packages you want to use in this project. Why?

  • Packages (or their dependencies) on the Python Package Index can be incompatible with each other – meaning you can break things by updating.

  • Your operating system might keep older versions of some packages around, which means you could break your OS with an accidental update!

  • This also adds to projects being reproducible on other systems, as you keep track of the specific version of each package used in your project (you could do this in R with the renv package).
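To make these points concrete: a virtual environment is just a folder with a linked Python interpreter and its own package directory. A minimal sketch using Python’s stdlib venv module (the demo path is made up so it does not clobber the real project environment; reticulate performs an equivalent step for you):

```python
# a virtual environment is a folder with a linked Python interpreter and
# its own package directory; the stdlib venv module can create one directly
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "python-env")  # demo path only
venv.create(env_dir, with_pip=False)  # with_pip=True would also bootstrap pip

# the interpreter link lands in bin/ on Linux/macOS and Scripts/ on Windows
subdir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join(env_dir, subdir)))  # True
```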

To find the right version of Python to link to in the virtual environment:

if (R.Version()$os == "mingw32") {
  system("where python")
} else {
  system("whereis python")
}

I choose the main Python installation in “/usr/bin/python” and use it as the base for a virtual environment. If you don’t have any Python version on your system, you can install one with reticulate::install_miniconda().

# this if condition prevents accidentally overwriting the environment when rerunning the notebook
if (!reticulate::virtualenv_exists(envname = "./python-env/")) {
  reticulate::virtualenv_create("./python-env/", python = "/usr/bin/python")
}
reticulate::virtualenv_exists(envname = "./python-env/")
[1] TRUE

reticulate is supposed to automatically pick this up when started, but to make sure, I set the environment variable RETICULATE_PYTHON to the binary of Python in the new environment:

if (R.Version()$os == "mingw32") {
  python_path <- file.path(getwd(), "/python-env/Scripts/python.exe")
} else {
  python_path <- file.path(getwd(), "/python-env/bin/python")
}
file.exists(python_path)
[1] TRUE
Sys.setenv(RETICULATE_PYTHON = python_path)

Optional: make this persist restarts of RStudio by saving the environment variable into an .Renviron file (otherwise the Sys.setenv() line above needs to be in every script):

# open the .Renviron file
usethis::edit_r_environ(scope = "project")
# or directly append it with the necessary line
readr::write_lines(
  x = paste0("RETICULATE_PYTHON=", python_path),
  file = ".Renviron",
  append = TRUE
)

Now reticulate should pick up the correct binary in the project folder:

library(reticulate)
py_config()
python:         /mnt/data/Dropbox/Teaching/reticulate_workshop/python-env/bin/python
libpython:      /usr/lib/libpython3.10.so
pythonhome:     /mnt/data/Dropbox/Teaching/reticulate_workshop/python-env:/mnt/data/Dropbox/Teaching/reticulate_workshop/python-env
version:        3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0]
numpy:          /mnt/data/Dropbox/Teaching/reticulate_workshop/python-env/lib/python3.10/site-packages/numpy
numpy_version:  1.23.5

NOTE: Python version was forced by RETICULATE_PYTHON

Installing Packages

reticulate::py_install() installs packages, similar to install.packages(). Let’s install the packages we need:

reticulate::py_install(c("spacy",
                         "scikit-learn",
                         "pandas",
                         "bertopic",
                         "sentence_transformers",
                         "simpletransformers"))

But there are some caveats:

  • not all packages are installed under the name you see in scripts (e.g., you install “scikit-learn”, but import it as sklearn)
  • you might need a specific version of a package to follow a specific tutorial
  • there can be different flavours of the same package (e.g., bertopic, bertopic[gensim], bertopic[spacy])
  • you will get a cryptic warning if you attempt to install base Python packages
reticulate::py_install("os")
Using virtual environment '/mnt/data/Dropbox/Teaching/reticulate_workshop/python-env' ...
+ '/mnt/data/Dropbox/Teaching/reticulate_workshop/python-env/bin/python' -m pip install --upgrade --no-user 'os'
Error: Error installing package(s): "'os'"
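The reason for this error: os ships with Python itself, so there is nothing for pip to install. A quick check (sketch; any base-library module behaves the same):

```python
# "os" cannot be pip-installed because it is part of Python's standard
# library; it is importable without any installation step
import importlib.util
print(importlib.util.find_spec("os") is not None)  # True
```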

General tip: see if the software distributor has installation instructions, like the excellent ones from spaCy:

If you see a $ at the beginning, these are command line/bash commands. Use the ```{bash} chunk option to run them, calling the pip and python binaries inside your virtual environment (you could also activate the environment instead).

./python-env/bin/pip install -U pip setuptools wheel scikit-learn pandas bertopic sentence_transformers
./python-env/bin/pip install -U 'spacy[cuda-autodetect]'
./python-env/bin/python -m spacy download en_core_web_sm
./python-env/bin/python -m spacy download de_core_news_sm

On Windows, the binary files are in a different location:

./python-env/Scripts/pip.exe install -U pip setuptools wheel scikit-learn pandas bertopic sentence_transformers
./python-env/Scripts/pip.exe install -U 'spacy[cuda-autodetect]'
./python-env/Scripts/python.exe -m spacy download en_core_web_sm
./python-env/Scripts/python.exe -m spacy download de_core_news_sm

Workflow

In my opinion, a nice workflow is to use R and Python together in a Quarto document. All you need to do to tell Quarto to run a Python chunk instead of an R one is to replace ```{r} with ```{python}.

text <- "Hello World! From R"
print(text)
[1] "Hello World! From R"
text = "Hello World! From Python"
print(text)
Hello World! From Python

You can even set up a shortcut to make these chunks (I like Ctrl+Alt+P):

To get an interactive Python session in your Console, you can use reticulate::repl_python().

As you’ve seen above, the code is pretty similar, with a few key differences:

  • = instead of <-
  • code formatting is part of the syntax!
  • base Python does not have data.frame class, instead you have dictionaries or the DataFrame from the Pandas package
  • Python lists are the equivalent of R vectors
  • the *apply family of functions and vectorised code does not exist as such – everything is a for loop!
  • a lot of packages are written in an object-oriented rather than a functional style
  • many more!
my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
my_list + 2 # does not work in Python
Error in py_call_impl(callable, dots$args, dots$keywords): TypeError: can only concatenate list (not "int") to list
for i in my_list:
    print(i + 2)
3
4
5
6
7
8
9
10
11
12
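As a side note, the idiomatic shorthand for such a loop is a list comprehension, which is the closest Python gets to R’s vectorised feel (a sketch reusing the list from above):

```python
# a list comprehension is the one-line idiomatic version of the for loop above
my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
plus_two = [i + 2 for i in my_list]
print(plus_two)  # [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```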
my_dict = {'name': ['John', 'Jane', 'Jim', 'Joan'],
          'age': [32, 28, 40, 35],
          'city': ['New York', 'London', 'Paris', 'Berlin']}
my_dict
{'name': ['John', 'Jane', 'Jim', 'Joan'], 'age': [32, 28, 40, 35], 'city': ['New York', 'London', 'Paris', 'Berlin']}

The truly magical thing about reticulate is how seamlessly it hands objects back and forth between Python and R:

py$text
[1] "Hello World! From Python"
py$my_list
 [1]  1  2  3  4  5  6  7  8  9 10
py$my_dict
$name
[1] "John" "Jane" "Jim"  "Joan"

$age
[1] 32 28 40 35

$city
[1] "New York" "London"   "Paris"    "Berlin"  
my_df <- data.frame(num = 1:10,
                    let = LETTERS[1:10])
my_list <- list(df = my_df, 11:20)
r.text
'Hello World! From R'
r.my_df
   num let
0    1   A
1    2   B
2    3   C
3    4   D
4    5   E
5    6   F
6    7   G
7    8   H
8    9   I
9   10   J
r.my_list
{'df':    num let
0    1   A
1    2   B
2    3   C
3    4   D
4    5   E
5    6   F
6    7   G
7    8   H
8    9   I
9   10   J, '': [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]}

What I think is especially cool is that this even works with functions:

def hello(x=None):
  """
  :param x: name of the person to say hello to.
  """
  if not x:
    print("Hello World!")
  else:
    print("Hello " + x + "!")
py$hello()
py$hello("Class")
reticulate::py_help(py$hello)

Example 1: spaCy

The spacyr package is a good example of an R wrapper around a popular Python package, so comparing the two is a good vantage point for understanding what is happening. We can replicate the spacyr tutorial directly with reticulate to get going.

txt <- c(d1 = "spaCy is great at fast natural language processing.",
         d2 = "Mr. Smith spent two years in North Carolina. One in New York.")
doc_ids <- names(txt)
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp(r.txt[1])
x = doc[1]
for token in doc:
  print(token.text, "|", token.lemma_, "|", token.pos_, "|", token.ent_type_)
Mr. | Mr. | PROPN | 
Smith | Smith | PROPN | PERSON
spent | spend | VERB | 
two | two | NUM | DATE
years | year | NOUN | DATE
in | in | ADP | 
North | North | PROPN | GPE
Carolina | Carolina | PROPN | GPE
. | . | PUNCT | 
One | one | NUM | CARDINAL
in | in | ADP | 
New | New | PROPN | GPE
York | York | PROPN | GPE
. | . | PUNCT | 
doc <- py$doc
doc
Mr. Smith spent two years in North Carolina. One in New York.
doc[1]
Smith
doc[1]$pos_
[1] "PROPN"
tibble::tibble(
  token = sapply(seq_along(doc) - 1, function(i) doc[i]$text),
  lemma = sapply(seq_along(doc) - 1, function(i) doc[i]$lemma_),
  pos = sapply(seq_along(doc) - 1, function(i) doc[i]$pos_),
  entity = sapply(seq_along(doc) - 1, function(i) doc[i]$ent_type_)
)
# A tibble: 14 × 4
   token    lemma    pos   entity    
   <chr>    <chr>    <chr> <chr>     
 1 Mr.      Mr.      PROPN ""        
 2 Smith    Smith    PROPN "PERSON"  
 3 spent    spend    VERB  ""        
 4 two      two      NUM   "DATE"    
 5 years    year     NOUN  "DATE"    
 6 in       in       ADP   ""        
 7 North    North    PROPN "GPE"     
 8 Carolina Carolina PROPN "GPE"     
 9 .        .        PUNCT ""        
10 One      one      NUM   "CARDINAL"
11 in       in       ADP   ""        
12 New      New      PROPN "GPE"     
13 York     York     PROPN "GPE"     
14 .        .        PUNCT ""        
def spacy_parse(doc_id, text):
  doc = nlp(text)
  toks = [] # make empty list to fill
  for sent_id, sent in enumerate(doc.sents): # loop over sentences
    for token in sent: # loop over tokens
      toks.append({
        "doc_id": doc_id,
        'sentence_id': sent_id + 1, # python numbers start at 0, we want to start at 1
        'token_id': token.i + 1,
        'token': token.text,
        'lemma': token.lemma_,
        'pos': token.pos_,
        'entity': token.ent_type_
        })
  return toks
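The enumerate()/+ 1 pattern used in spacy_parse() is worth isolating for R users: Python counts from 0, so enumerate() yields 0-based indices that we shift by one (a sketch with made-up sentences):

```python
# enumerate() yields (index, item) pairs starting at 0; adding 1 gives
# R-style 1-based ids, as done for sentence_id in spacy_parse above
sents = ["Mr. Smith spent two years in North Carolina.", "One in New York."]
sent_ids = [i + 1 for i, sent in enumerate(sents)]
print(sent_ids)  # [1, 2]
```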
py$spacy_parse(1, txt[2])[[1]]
$doc_id
[1] 1

$sentence_id
[1] 1

$token_id
[1] 1

$token
[1] "Mr."

$lemma
[1] "Mr."

$pos
[1] "PROPN"

$entity
[1] ""
library(tidyverse)
spacy_parse <- function(text, doc_id = names(text)) {
  result_list <- map2(doc_id, text, function(x, y) py$spacy_parse(x, y))
  map_df(unlist(result_list, recursive = FALSE), as_tibble)
}
spacy_parse(txt)
# A tibble: 23 × 7
   doc_id sentence_id token_id token      lemma      pos   entity
   <chr>        <int>    <int> <chr>      <chr>      <chr> <chr> 
 1 d1               1        1 spaCy      spacy      INTJ  ""    
 2 d1               1        2 is         be         AUX   ""    
 3 d1               1        3 great      great      ADJ   ""    
 4 d1               1        4 at         at         ADP   ""    
 5 d1               1        5 fast       fast       ADJ   ""    
 6 d1               1        6 natural    natural    ADJ   ""    
 7 d1               1        7 language   language   NOUN  ""    
 8 d1               1        8 processing processing NOUN  ""    
 9 d1               1        9 .          .          PUNCT ""    
10 d2               1        1 Mr.        Mr.        PROPN ""    
# … with 13 more rows

Example 2: NMF Topic Models from scikit-learn

Inspired by Text Mining with R

library(janeaustenr)
books <- austen_books() %>%
  mutate(paragraph = cumsum(text == "" & lag(text) != "")) %>%
  group_by(paragraph) %>%
  summarise(book = head(book, 1),
            text = trimws(paste(text, collapse = " ")),
            .groups = "drop")

glimpse(books)
Rows: 10,293
Columns: 3
$ paragraph <int> 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17…
$ book      <fct> Sense & Sensibility, Sense & Sensibility, Sense & Sensibilit…
$ text      <chr> "SENSE AND SENSIBILITY", "by Jane Austen", "(1811)", "CHAPTE…
library(tidytext)
austen_dfm <- books %>%
  unnest_tokens(output = feature, input = text) %>%
  count(book, paragraph, feature) %>%
  mutate(doc_id = paste0(book, "_", paragraph)) %>%
  cast_dfm(document = doc_id, term = feature, value = n)
sklearn <- import("sklearn")
model <- sklearn$decomposition$NMF(
  n_components = 6L,  # number of topics
  random_state  =  5L, # equivalent of seed for reproducibility
  max_iter = 400L
)$fit(austen_dfm)

beta <- model$components_
colnames(beta) <- colnames(austen_dfm)
rownames(beta) <- paste0("topic_", seq_len(nrow(beta)))
glimpse(beta)
 num [1:6, 1:14520] 0 0.62 2.888 0 0.888 ...
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:6] "topic_1" "topic_2" "topic_3" "topic_4" ...
  ..$ : chr [1:14520] "and" "sense" "sensibility" "austen" ...
gamma <- model$transform(austen_dfm)
colnames(gamma) <- paste0("topic_", seq_len(ncol(gamma)))
rownames(gamma) <- paste0("text_", seq_len(nrow(gamma)))
glimpse(gamma)
 num [1:10286, 1:6] 0 0.00177 0 0.00001 0.62077 ...
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:10286] "text_1" "text_2" "text_3" "text_4" ...
  ..$ : chr [1:6] "topic_1" "topic_2" "topic_3" "topic_4" ...
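For intuition, NMF approximates the document-term matrix as the product of the two matrices retrieved above: gamma (documents × topics) times beta (topics × terms). A toy reconstruction with made-up numbers, stdlib only:

```python
# toy NMF reconstruction: V (docs x terms) ~ W (docs x topics) @ H (topics x terms)
W = [[1.0, 0.0],          # gamma-like: 2 documents, 2 topics
     [0.5, 0.5]]
H = [[2.0, 0.0, 1.0],     # beta-like: 2 topics, 3 terms
     [0.0, 2.0, 1.0]]
# stdlib matrix product: rows of W against columns of H
V = [[sum(w * h for w, h in zip(row, col)) for col in zip(*H)] for row in W]
print(V)  # [[2.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
```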
beta %>%
  as_tibble(rownames = "topic") %>%
  pivot_longer(cols = -topic, names_to = "feature", values_to = "beta") %>%
  mutate(topic = fct_inorder(topic)) %>%
  group_by(topic) %>%
  slice_max(beta, n = 10) %>%
  arrange(topic, -beta) %>%
  mutate(feature = reorder_within(feature, beta, topic)) %>%
  ggplot(aes(x = beta, y = feature, fill = topic)) +
  geom_col() +
  facet_wrap(~topic, ncol = 2, scales = "free") +
  theme_minimal() +
  labs(x = NULL, y = NULL, title = "Top-features per topic") +
  scale_y_reordered()

Example 3: BERTopic

I use the quanteda tutorial on topic models to show an example workflow for BERTopic.

library(quanteda.corpora)
corp_news <- download("data_corpus_guardian")[["documents"]]
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP

# confusingly, this is the setup part
topic_model = BERTopic(language="english",
                       top_n_words=5,
                       n_gram_range=(1, 2),
                       nr_topics="auto", # change if you want a specific nr of topics
                       calculate_probabilities=True,
                       umap_model=UMAP(random_state=42)) # make reproducible

# and only here we actually run something
topics, doc_topic = topic_model.fit_transform(r.corp_news.texts)

Unlike traditional topic models, BERTopic uses an algorithm that automatically determines a sensible number of topics and also automatically labels topics:

topic_model <- py$topic_model
topic_labels <- tibble(topic = as.integer(names(topic_model$topic_labels_)),
                       label = unlist(topic_model$topic_labels_ )) %>%
  mutate(label = fct_reorder(label, topic))
topic_labels
# A tibble: 69 × 2
   topic label                      
   <int> <fct>                      
 1    -1 -1_the_to_of_and           
 2     0 0_the_of_to_and            
 3     1 1_trump_clinton_the_in     
 4     2 2_nhs_the_patients_care    
 5     3 3_the_in_growth_of         
 6     4 4_the_to_syrian_of         
 7     5 5_bank_the bank_the_of     
 8     6 6_sales_black friday_the_to
 9     7 7_ebola_health_to_and      
10     8 8_to_you_the_it            
# … with 59 more rows

Note that -1 describes a trash topic with words and documents that do not really belong anywhere. BERTopic also supplies the top words, i.e., the ones that most likely belong to each topic. In the code above I requested 5 words for each topic:

top_words <- map_df(names(topic_model$topic_representations_), function(t) {
  map_df(topic_model$topic_representations_[[t]], function(y)
    tibble(feature = y[[1]], prob = y[[2]])) %>%
    mutate(topic = as.integer(t), .before = 1L)
})

We can plot them in the same way as above:

top_words %>%
  filter(topic %in% c(1, 7, 44, 53, 65, 66)) %>% # select a couple of topics
  left_join(topic_labels, by = "topic") %>%
  mutate(feature = reorder_within(feature, prob, topic)) %>%
  ggplot(aes(x = prob, y = feature, fill = topic, label = label)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(vars(label), ncol = 2, scales = "free_y") +
  scale_y_reordered() +
  labs(x = NULL, y = NULL)

We can use a nice little visualization built into BERTopic to show how topics are linked to one another:

# map intertopic distance
intertopic_distance = topic_model.visualize_topics(width=700, height=700)
# save fig
intertopic_distance.write_html("python-in-r_files/figure-html/bert_corp_news_intertopic.html")
htmltools::includeHTML("python-in-r_files/figure-html/bert_corp_news_intertopic.html")

BERTopic also classifies documents into the topic categories (again, not something you would really do with LDA topic models) and provides a nice visualisation of trends over time. Unfortunately, the R date format does not automagically translate to Python, so we need to convert the dates to strings:

corp_news_t <- corp_news %>%
  mutate(date_chr = as.character(date))
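If you need proper datetimes back on the Python side, the strings produced by as.character() parse cleanly with the standard library (a sketch with a hypothetical date value):

```python
# date strings coming over from R (e.g. "2016-01-15") parse back into
# datetime objects with the stdlib
from datetime import datetime

date_chr = "2016-01-15"  # hypothetical value in the format as.character(date) produces
d = datetime.strptime(date_chr, "%Y-%m-%d")
print(d.year, d.month, d.day)  # 2016 1 15
```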
topics_over_time = topic_model.topics_over_time(docs=r.corp_news_t.texts,
                                                timestamps=r.corp_news_t.date_chr,
                                                global_tuning=True,
                                                evolution_tuning=True,
                                                nr_bins=20)
#plot figure
fig_overtime = topic_model.visualize_topics_over_time(topics_over_time,
                                                      topics=[1, 7, 44, 53, 65, 66])
#save figure
fig_overtime.write_html("python-in-r_files/figure-html/fig_overtime.html")
htmltools::includeHTML("python-in-r_files/figure-html/fig_overtime.html")

Example 4: Supervised Learning with RoBERTa

To demonstrate the workflow of supervised learning, I’m replicating the example from the quanteda Naive Bayes tutorial.

import pandas as pd
import os
import torch
from simpletransformers.classification import ClassificationModel

# args copied from grafzahl
model_args = {
  "num_train_epochs": 1, # increase for multiple runs, which can yield better performance
  "use_multiprocessing": False,
  "use_multiprocessing_for_evaluation": False,
  "overwrite_output_dir": True,
  "reprocess_input_data": True,
  "fp16":  True,
  "save_steps":  -1,
  "save_eval_checkpoints":  False,
  "save_model_every_epoch":  False,
  "silent":  True,
}

os.environ["TOKENIZERS_PARALLELISM"] = "false"

roberta_model = ClassificationModel(model_type="roberta",
                                    model_name="roberta-base",
                                    # Use GPU if available
                                    use_cuda=torch.cuda.is_available(),
                                    args=model_args)
corp_movies <- quanteda.textmodels::data_corpus_moviereviews %>%
  tibble(quanteda::docvars(x = .), text = .)

corp_movies %>%
  count(sentiment)
# A tibble: 2 × 2
  sentiment     n
  <fct>     <int>
1 neg        1000
2 pos        1000
set.seed(1)
corp_movies_train <- corp_movies %>%
  slice_sample(prop = 0.9)

corp_movies_test <- corp_movies %>%
  filter(!id2 %in% corp_movies_train$id2)
# process data to the form simpletransformers needs
train_df = r.corp_movies_train
train_df['labels'] = train_df['sentiment'].astype('category').cat.codes
train_df = train_df[['text', 'labels']]

roberta_model.train_model(train_df)

# test data needs to be a list
test_l = r.corp_movies_test["text"].tolist()
predictions, raw_outputs = roberta_model.predict(test_l)
results <- tibble(
  truth = corp_movies_test$sentiment,
  estimate = factor(c("neg", "pos")[py$predictions + 1])
)
conf_mat <- yardstick::conf_mat(results, truth, estimate)
summary(conf_mat)
# A tibble: 13 × 3
   .metric              .estimator .estimate
   <chr>                <chr>          <dbl>
 1 accuracy             binary         0.735
 2 kap                  binary         0.457
 3 sens                 binary         0.598
 4 spec                 binary         0.852
 5 ppv                  binary         0.775
 6 npv                  binary         0.713
 7 mcc                  binary         0.468
 8 j_index              binary         0.450
 9 bal_accuracy         binary         0.725
10 detection_prevalence binary         0.355
11 precision            binary         0.775
12 recall               binary         0.598
13 f_meas               binary         0.675
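The py$predictions + 1 in the results chunk above bridges Python’s 0-based label codes and R’s 1-based indexing; on the Python side, the codes index a label list directly (sketch with hypothetical predictions):

```python
# simpletransformers returns 0-based label codes; in Python they index a
# label list directly (in R, py$predictions + 1 is needed instead)
labels = ["neg", "pos"]
predictions = [0, 1, 1, 0]  # hypothetical model output
print([labels[p] for p in predictions])  # ['neg', 'pos', 'pos', 'neg']
```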

Example 5: Zero-Shot Classification

Something I learned about recently is zero-shot classification: these models do not need to be trained on your categories, but infer category-text relationships from the data they were originally trained on. You can get one such model from https://huggingface.co/MoritzLaurer/xlm-v-base-mnli-xnli.

from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="MoritzLaurer/xlm-v-base-mnli-xnli")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
{'sequence': 'Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU', 'labels': ['politics', 'environment', 'economy', 'entertainment'], 'scores': [0.8248090744018555, 0.07267886400222778, 0.06934279948472977, 0.03316924721002579]}
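Picking the winning label out of such an output is a one-liner on the Python side (a sketch using the labels/scores shape shown above, with the scores rounded):

```python
# the pipeline returns parallel 'labels' and 'scores' lists; max() over
# the zipped (score, label) pairs picks the winning label
output = {"labels": ["politics", "environment", "economy", "entertainment"],
          "scores": [0.825, 0.073, 0.069, 0.033]}
best = max(zip(output["scores"], output["labels"]))[1]
print(best)  # politics
```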
zero_shot_classification <- function(text, labels) {
  res <- py$classifier(text, labels, multi_label=FALSE)
  map_df(seq_along(res), function(i) {
    as_tibble(res[[i]]) %>%
      mutate(id = i)
  }) %>%
    group_by(id) %>%
    slice_max(scores, n = 1)
}

set.seed(3)
test <- corp_movies_test %>%
  sample_n(10)

pred <- zero_shot_classification(
  as.character(test$text),
  c("negative", "positive")
)

results <- pred %>%
  ungroup() %>%
  mutate(estimate = factor(labels),
         estimate = fct_recode(estimate,
                               neg = "negative",
                               pos = "positive")) %>%
  mutate(truth = test$sentiment[1:10])

conf_mat <- yardstick::conf_mat(results, truth, estimate)
summary(conf_mat)
# A tibble: 13 × 3
   .metric              .estimator .estimate
   <chr>                <chr>          <dbl>
 1 accuracy             binary         0.6  
 2 kap                  binary         0.231
 3 sens                 binary         0.75 
 4 spec                 binary         0.5  
 5 ppv                  binary         0.5  
 6 npv                  binary         0.75 
 7 mcc                  binary         0.25 
 8 j_index              binary         0.25 
 9 bal_accuracy         binary         0.625
10 detection_prevalence binary         0.6  
11 precision            binary         0.5  
12 recall               binary         0.75 
13 f_meas               binary         0.6  

Further Learning

Wrap up

Some information about the session.

Sys.time()
[1] "2023-02-14 10:01:45 CET"
sessionInfo()
R version 4.2.2 (2022-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: EndeavourOS

Matrix products: default
BLAS:   /usr/lib/libblas.so.3.11.0
LAPACK: /usr/lib/liblapack.so.3.11.0

locale:
 [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_GB.UTF-8        LC_COLLATE=en_GB.UTF-8    
 [5] LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8   
 [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] quanteda.corpora_0.9.2 tidytext_0.3.4         janeaustenr_1.0.0     
 [4] forcats_0.5.2          stringr_1.5.0          dplyr_1.0.10          
 [7] purrr_1.0.0            readr_2.1.3            tidyr_1.2.1           
[10] tibble_3.1.8           ggplot2_3.4.0          tidyverse_1.3.2       
[13] reticulate_1.26       

loaded via a namespace (and not attached):
 [1] fs_1.5.2                    lubridate_1.9.0            
 [3] httr_1.4.4                  SnowballC_0.7.0            
 [5] tools_4.2.2                 backports_1.4.1            
 [7] utf8_1.2.2                  R6_2.5.1                   
 [9] DBI_1.1.3                   colorspace_2.0-3           
[11] yardstick_1.1.0             withr_2.5.0                
[13] tidyselect_1.2.0            compiler_4.2.2             
[15] glmnet_4.1-4                cli_3.6.0                  
[17] rvest_1.0.3                 SparseM_1.81               
[19] xml2_1.3.3                  labeling_0.4.2             
[21] scales_1.2.1                digest_0.6.30              
[23] rmarkdown_2.18              pkgconfig_2.0.3            
[25] htmltools_0.5.3             dbplyr_2.2.1               
[27] fastmap_1.1.0               htmlwidgets_1.5.4          
[29] rlang_1.0.6                 readxl_1.4.1               
[31] shape_1.4.6                 farver_2.1.1               
[33] generics_0.1.3              jsonlite_1.8.4             
[35] tokenizers_0.2.3            googlesheets4_1.0.1        
[37] magrittr_2.0.3              Matrix_1.5-3               
[39] Rcpp_1.0.9                  munsell_0.5.0              
[41] fansi_1.0.3                 lifecycle_1.0.3            
[43] stringi_1.7.12              yaml_2.3.6                 
[45] grid_4.2.2                  LiblineaR_2.10-12          
[47] crayon_1.5.2                lattice_0.20-45            
[49] haven_2.5.1                 splines_4.2.2              
[51] hms_1.1.2                   knitr_1.41                 
[53] pillar_1.8.1                codetools_0.2-18           
[55] stopwords_2.3               fastmatch_1.1-3            
[57] reprex_2.0.2                glue_1.6.2                 
[59] quanteda.textmodels_0.9.5-1 evaluate_0.18              
[61] RcppParallel_5.1.5          modelr_0.1.10              
[63] png_0.1-7                   vctrs_0.5.1                
[65] tzdb_0.3.0                  foreach_1.5.2              
[67] quanteda_3.2.3              cellranger_1.1.0           
[69] gtable_0.3.1                assertthat_0.2.1           
[71] xfun_0.35                   broom_1.0.1                
[73] survival_3.4-0              googledrive_2.0.0          
[75] gargle_1.2.1                iterators_1.0.14           
[77] timechange_0.1.1            ellipsis_0.3.2             
py_list_packages() %>% 
  as_tibble() %>% 
  select(-requirement) %>% 
  print(n = Inf)
# A tibble: 130 × 2
    package                    version                                          
    <chr>                      <chr>                                            
  1 "absl-py"                  "1.4.0"                                          
  2 "aiohttp"                  "3.8.4"                                          
  3 "aiosignal"                "1.3.1"                                          
  4 "altair"                   "4.2.2"                                          
  5 "appdirs"                  "1.4.4"                                          
  6 "async-timeout"            "4.0.2"                                          
  7 "attrs"                    "22.2.0"                                         
  8 "bertopic"                 "0.13.0"                                         
  9 "blinker"                  "1.5"                                            
 10 "blis"                     "0.7.9"                                          
 11 "cachetools"               "5.3.0"                                          
 12 "catalogue"                "2.0.8"                                          
 13 "certifi"                  "2022.12.7"                                      
 14 "charset-normalizer"       "3.0.1"                                          
 15 "click"                    "8.1.3"                                          
 16 "confection"               "0.0.4"                                          
 17 "cymem"                    "2.0.7"                                          
 18 "Cython"                   "0.29.33"                                        
 19 "datasets"                 "2.9.0"                                          
 20 "de-core-news-sm "         " https://github.com/explosion/spacy-models/rele…
 21 "decorator"                "5.1.1"                                          
 22 "dill"                     "0.3.6"                                          
 23 "docker-pycreds"           "0.4.0"                                          
 24 "en-core-web-sm "          " https://github.com/explosion/spacy-models/rele…
 25 "entrypoints"              "0.4"                                            
 26 "filelock"                 "3.9.0"                                          
 27 "frozenlist"               "1.3.3"                                          
 28 "fsspec"                   "2023.1.0"                                       
 29 "gitdb"                    "4.0.10"                                         
 30 "GitPython"                "3.1.30"                                         
 31 "google-auth"              "2.16.0"                                         
 32 "google-auth-oauthlib"     "0.4.6"                                          
 33 "grpcio"                   "1.51.1"                                         
 34 "hdbscan"                  "0.8.29"                                         
 35 "huggingface-hub"          "0.12.0"                                         
 36 "idna"                     "3.4"                                            
 37 "importlib-metadata"       "6.0.0"                                          
 38 "Jinja2"                   "3.1.2"                                          
 39 "joblib"                   "1.2.0"                                          
 40 "jsonschema"               "4.17.3"                                         
 41 "langcodes"                "3.3.0"                                          
 42 "llvmlite"                 "0.39.1"                                         
 43 "Markdown"                 "3.4.1"                                          
 44 "markdown-it-py"           "2.1.0"                                          
 45 "MarkupSafe"               "2.1.2"                                          
 46 "mdurl"                    "0.1.2"                                          
 47 "multidict"                "6.0.4"                                          
 48 "multiprocess"             "0.70.14"                                        
 49 "murmurhash"               "1.0.9"                                          
 50 "nltk"                     "3.8.1"                                          
 51 "numba"                    "0.56.4"                                         
 52 "numpy"                    "1.23.5"                                         
 53 "nvidia-cublas-cu11"       "11.10.3.66"                                     
 54 "nvidia-cuda-nvrtc-cu11"   "11.7.99"                                        
 55 "nvidia-cuda-runtime-cu11" "11.7.99"                                        
 56 "nvidia-cudnn-cu11"        "8.5.0.96"                                       
 57 "oauthlib"                 "3.2.2"                                          
 58 "packaging"                "23.0"                                           
 59 "pandas"                   "1.5.3"                                          
 60 "pathtools"                "0.1.2"                                          
 61 "pathy"                    "0.10.1"                                         
 62 "Pillow"                   "9.4.0"                                          
 63 "plotly"                   "5.13.0"                                         
 64 "preshed"                  "3.0.8"                                          
 65 "protobuf"                 "3.20.3"                                         
 66 "psutil"                   "5.9.4"                                          
 67 "pyarrow"                  "11.0.0"                                         
 68 "pyasn1"                   "0.4.8"                                          
 69 "pyasn1-modules"           "0.2.8"                                          
 70 "pydantic"                 "1.10.4"                                         
 71 "pydeck"                   "0.8.0"                                          
 72 "Pygments"                 "2.14.0"                                         
 73 "Pympler"                  "1.0.1"                                          
 74 "pynndescent"              "0.5.8"                                          
 75 "pyrsistent"               "0.19.3"                                         
 76 "python-dateutil"          "2.8.2"                                          
 77 "pytz"                     "2022.7.1"                                       
 78 "pytz-deprecation-shim"    "0.1.0.post0"                                    
 79 "PyYAML"                   "5.4.1"                                          
 80 "regex"                    "2022.10.31"                                     
 81 "requests"                 "2.28.2"                                         
 82 "requests-oauthlib"        "1.3.1"                                          
 83 "responses"                "0.18.0"                                         
 84 "rich"                     "13.3.1"                                         
 85 "rsa"                      "4.9"                                            
 86 "scikit-learn"             "1.2.1"                                          
 87 "scipy"                    "1.10.0"                                         
 88 "semver"                   "2.13.0"                                         
 89 "sentence-transformers"    "2.2.2"                                          
 90 "sentencepiece"            "0.1.97"                                         
 91 "sentry-sdk"               "1.15.0"                                         
 92 "seqeval"                  "1.2.2"                                          
 93 "setproctitle"             "1.3.2"                                          
 94 "simpletransformers"       "0.63.9"                                         
 95 "six"                      "1.16.0"                                         
 96 "smart-open"               "6.3.0"                                          
 97 "smmap"                    "5.0.0"                                          
 98 "spacy"                    "3.5.0"                                          
 99 "spacy-legacy"             "3.0.12"                                         
100 "spacy-loggers"            "1.0.4"                                          
101 "srsly"                    "2.4.5"                                          
102 "streamlit"                "1.18.1"                                         
103 "tenacity"                 "8.2.1"                                          
104 "tensorboard"              "2.12.0"                                         
105 "tensorboard-data-server"  "0.7.0"                                          
106 "tensorboard-plugin-wit"   "1.8.1"                                          
107 "thinc"                    "8.1.7"                                          
108 "threadpoolctl"            "3.1.0"                                          
109 "tokenizers"               "0.13.2"                                         
110 "toml"                     "0.10.2"                                         
111 "toolz"                    "0.12.0"                                         
112 "torch"                    "1.13.1"                                         
113 "torchvision"              "0.14.1"                                         
114 "tornado"                  "6.2"                                            
115 "tqdm"                     "4.64.1"                                         
116 "transformers"             "4.26.1"                                         
117 "typer"                    "0.7.0"                                          
118 "typing_extensions"        "4.4.0"                                          
119 "tzdata"                   "2022.7"                                         
120 "tzlocal"                  "4.2"                                            
121 "umap-learn"               "0.5.3"                                          
122 "urllib3"                  "1.26.14"                                        
123 "validators"               "0.20.0"                                         
124 "wandb"                    "0.13.10"                                        
125 "wasabi"                   "1.1.1"                                          
126 "watchdog"                 "2.2.1"                                          
127 "Werkzeug"                 "2.2.2"                                          
128 "xxhash"                   "3.2.0"                                          
129 "yarl"                     "1.8.2"                                          
130 "zipp"                     "3.13.0"
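The listing above was produced inside the project's virtual environment. If you want to generate a similar name/version listing from Python itself, the standard library's `importlib.metadata` module offers a minimal way to do so (a sketch; the exact packages and versions will of course differ between environments):

```python
# List the packages installed in the current Python environment,
# using only the standard library (Python 3.8+).
from importlib.metadata import distributions

pkgs = sorted(
    (dist.metadata["Name"], dist.version)
    for dist in distributions()
    if dist.metadata["Name"]  # skip entries with broken metadata
)

for name, version in pkgs:
    print(f"{name:<30} {version}")
```

From the R side, `reticulate::py_list_packages()` returns a comparable data frame of package names and versions for the active environment, which is handy for recording the environment state in a reproducible script.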